Adaptive Batch Size Selection in Active Learning for Regression

Authors

Abstract

Training supervised machine learning models requires labeled examples. A judicious choice of examples is helpful when there is a significant cost associated with assigning labels. This article improves upon a promising extant method, Batch-mode Expected Model Change Maximization (B-EMCM), for selecting examples to be labeled in regression problems. Specifically, it develops and evaluates alternate strategies for adaptively selecting the batch size in B-EMCM.

By determining the cumulative error that occurs from the estimation of stochastic gradient descent, a stop criterion for each iteration can be specified to ensure that the selected candidates are the most beneficial to model learning. The new methodology is compared to B-EMCM via mean absolute error and root mean square error over ten iterations, benchmarked against several data sets.

Using multiple data sets and metrics across all methods, one variation of AB-EMCM, the max bound of the accumulated error (AB-EMCM Max), showed the best results for an adaptive approach. It achieved better root mean squared error (RMSE) and mean absolute error (MAE) than the other adaptive and non-adaptive methods while reaching that result in nearly the same number of iterations as the non-adaptive methods.
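The selection loop the abstract describes can be sketched roughly as follows. This is an illustrative simplification, not the authors' exact algorithm: the `expected_model_change` scoring (gradient-norm estimates under a bootstrap ensemble of linear models) and the `max_bound` stopping rule are assumptions standing in for the paper's B-EMCM scoring and the AB-EMCM Max criterion.

```python
import numpy as np

def expected_model_change(X_pool, models):
    """Score each unlabeled pool point by its expected model change.

    For linear regression trained by SGD, the gradient for a candidate x
    with label y is proportional to (w.x - y) * x, so the change is
    approximated by the gradient norm, with the unknown y replaced by
    predictions from a bootstrap ensemble of weight vectors.
    """
    preds = np.stack([X_pool @ w for w in models])   # (n_models, n_pool)
    mean_pred = preds.mean(axis=0)
    norms = np.linalg.norm(X_pool, axis=1)
    scores = np.zeros(len(X_pool))
    for p in preds:
        # Disagreement with the ensemble mean scaled by the input norm.
        scores += np.abs(mean_pred - p) * norms
    return scores / len(models)

def adaptive_batch(X_pool, models, max_bound):
    """Greedily add candidates until the accumulated estimated change
    would exceed max_bound (an 'AB-EMCM Max'-style stop criterion)."""
    scores = expected_model_change(X_pool, models)
    order = np.argsort(scores)[::-1]                 # highest change first
    batch, total = [], 0.0
    for i in order:
        if batch and total + scores[i] > max_bound:
            break
        batch.append(int(i))
        total += scores[i]
    return batch, total
```

Under this sketch the batch size is not fixed in advance: each iteration stops adding candidates as soon as the accumulated change estimate reaches the bound, which is the adaptive behavior the abstract contrasts with fixed-batch B-EMCM.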



Similar articles

A batch ensemble approach to active learning with model selection

Optimally designing the location of training input points (active learning) and choosing the best model (model selection), which have been extensively studied, are two important components of supervised learning. However, these two issues seem to have been investigated separately as two independent problems. If training input points and models are simultaneously optimized, the generalization perf...

Full text

Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization

Active learning can lead to a dramatic reduction in labeling effort. However, in many practical implementations (such as crowdsourcing, surveys, high-throughput experimental design), it is preferable to query labels for batches of examples to be labelled in parallel. While several heuristics have been proposed for batch-mode active learning, little is known about their theoretical performance. ...

Full text

Adaptive Batch Size for Safe Policy Gradients

PROBLEM
• Monotonically improve a parametric Gaussian policy πθ in a continuous MDP, avoiding unsafe oscillations in the expected performance J(θ).
• Episodic policy gradient: estimate ∇̂θJ(θ) from a batch of N sample trajectories, then update θ′ ← θ + Λ∇̂θJ(θ).
• Tune the step size Λ and the batch size N to limit oscillations. Not trivial: Λ trades off with speed of convergence (adaptive methods); N trade-off...
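The batch update rule in this snippet can be illustrated with a toy sketch. This is not the paper's safe policy gradient method: `noisy_grad` is a hypothetical stand-in for per-trajectory gradient estimates, and the step size and batch size are fixed here rather than tuned as the snippet proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_gradient_step(theta, sample_gradient, step_size, batch_size):
    """One batch update: average N per-trajectory gradient estimates,
    then step along the averaged direction (theta' = theta + step * grad)."""
    grads = np.stack([sample_gradient(theta) for _ in range(batch_size)])
    grad_hat = grads.mean(axis=0)
    return theta + step_size * grad_hat, grad_hat

# Toy objective J(theta) = -||theta - 2||^2; noisy gradients stand in
# for the stochastic per-trajectory estimates of the real setting.
def noisy_grad(theta):
    return -2.0 * (theta - 2.0) + rng.normal(0.0, 0.5, size=theta.shape)

theta = np.zeros(2)
for _ in range(50):
    theta, _ = policy_gradient_step(theta, noisy_grad,
                                    step_size=0.05, batch_size=8)
```

Averaging over a larger batch N reduces the variance of `grad_hat` (and hence oscillations in performance) at the cost of more samples per update, which is exactly the trade-off the snippet highlights.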

Full text

Active Learning with Model Selection in Linear Regression

Optimally designing the location of training input points (active learning) and choosing the best model (model selection) are two important components of supervised learning and have been studied extensively. However, these two issues seem to have been investigated separately as two independent problems. If training input points and models are simultaneously optimized, the generalization perfor...

Full text

Improving importance estimation in pool-based batch active learning for approximate linear regression

Pool-based batch active learning is aimed at choosing training inputs from a 'pool' of test inputs so that the generalization error is minimized. P-ALICE (Pool-based Active Learning using Importance-weighted least-squares learning based on Conditional Expectation of the generalization error) is a state-of-the-art method that can cope with model misspecification by weighting training samples acc...

Full text


Journal

Journal title: Journal of mathematical sciences & computational mathematics

Year: 2022

ISSN: 2644-3368, 2688-8300

DOI: https://doi.org/10.15864/jmscm.4101